CBR for State Value Function Approximation in Reinforcement Learning
Authors
Abstract
Case-based reasoning (CBR) is one of the techniques that can be applied to the task of approximating a function over high-dimensional, continuous spaces. In Reinforcement Learning systems a learning agent is faced with the problem of assessing the desirability of the state it finds itself in. If the state space is very large and/or continuous, the availability of a suitable mechanism to approximate a value function – which estimates the value of single states – is of crucial importance. In this paper, we investigate the use of case-based methods to realise that task. The approach we take is evaluated in a case study in robotic soccer simulation.
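The abstract gives no implementation detail, but the core idea – a case base standing in for a tabular state value function – can be sketched briefly. The Python code below is a hypothetical illustration, not the method evaluated in the paper: the class name CaseBasedValueFunction, the parameters k, add_threshold and alpha, and the distance-weighted k-nearest-neighbour retrieval are all assumptions. A case pairs a state vector with a value estimate; V(s) is retrieved as a distance-weighted average over the k nearest cases, and a TD-style target either revises the nearest case or is retained as a new case when the state is not yet covered.

import numpy as np

class CaseBasedValueFunction:
    """Illustrative case-based approximation of a state value function V(s).
    Cases store (state vector, value estimate); retrieval is a distance-weighted
    k-nearest-neighbour average. A sketch only, not the paper's approach."""

    def __init__(self, k=5, add_threshold=0.1, alpha=0.2):
        self.k = k                          # neighbours used for estimation (assumed parameter)
        self.add_threshold = add_threshold  # distance beyond which a new case is retained
        self.alpha = alpha                  # learning rate for revising stored values
        self.states = []                    # problem parts: state vectors
        self.values = []                    # solution parts: value estimates

    def estimate(self, state):
        """Return V(state) as a distance-weighted average of the k nearest cases."""
        if not self.states:
            return 0.0
        dists = np.linalg.norm(np.array(self.states) - np.asarray(state, dtype=float), axis=1)
        idx = np.argsort(dists)[: self.k]
        weights = 1.0 / (dists[idx] + 1e-6)
        return float(np.dot(weights, np.array(self.values)[idx]) / weights.sum())

    def update(self, state, target):
        """Revise the nearest case towards a TD target, or retain a new case
        if no stored state is close enough."""
        state = np.asarray(state, dtype=float)
        if not self.states:
            self.states.append(state)
            self.values.append(float(target))
            return
        dists = np.linalg.norm(np.array(self.states) - state, axis=1)
        nearest = int(np.argmin(dists))
        if dists[nearest] > self.add_threshold:
            self.states.append(state)
            self.values.append(float(target))
        else:
            self.values[nearest] += self.alpha * (float(target) - self.values[nearest])

In a TD(0) loop such an approximator would be queried and revised with target = reward + gamma * vf.estimate(next_state), followed by vf.update(state, target); how cases are retained, revised or forgotten is exactly where a concrete CBR design would differ from this sketch.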
Similar Papers
A Call Admission Control Scheme Using NeuroEvolution Algorithm in Cellular Networks
This paper proposes an approach for learning call admission control (CAC) policies in a cellular network that handles several classes of traffic with different resource requirements. The performance measures in cellular networks are long term revenue, utility, call blocking rate (CBR) and handoff failure rate (CDR). Reinforcement Learning (RL) can be used to provide the optimal solution, howeve...
Learning Continuous Action Models in a Real-Time Strategy Environment
Although several researchers have integrated methods for reinforcement learning (RL) with case-based reasoning (CBR) to model continuous action spaces, existing integrations typically employ discrete approximations of these models. This limits the set of actions that can be modeled, and may lead to non-optimal solutions. We introduce the Continuous Action and State Space Learner (CASSL), an int...
A Convergent Reinforcement Learning Algorithm in the Continuous Case: The Finite-Element Reinforcement Learning
This paper presents a direct reinforcement learning algorithm, called Finite-Element Reinforcement Learning, in the continuous case, i.e. continuous state-space and time. The evaluation of the value function enables the generation of an optimal policy for reinforcement control problems, such as target or obstacle problems, viability problems or optimization problems. We propose a continuous for...
Manifold Representations for Value-Function Approximation in Reinforcement Learning
Reinforcement learning (RL) has shown itself to be a successful paradigm for solving optimal control problems. However, that success has been mostly limited to problems with a finite set of states and actions. The problem of extending reinforcement learning techniques to the continuous state case has received quite a bit of attention in the last few years. One approach to solving reinforcement ...
Explicit Manifold Representations for Value-Function Approximation in Reinforcement Learning
We are interested in using reinforcement learning for large, real-world control problems. In particular, we are interested in problems with continuous, multi-dimensional state spaces, in which traditional reinforcement learning approaches perform poorly. Value-function approximation addresses some of the problems of traditional algorithms (for example, continuous state spaces), and has been sho...
Journal:
Volume / Issue:
Pages: -
Publication date: 2005